Simon Willison

3 items

Fortune 2026-04-25-3

Cursor used a swarm of AI agents powered by OpenAI to build and run a web browser for a week—with no human help

Every AI headline reports the model that did the work. Wrong unit of analysis. GPT-5.2 didn't build a browser; Cursor's planner-worker-judge harness built one using GPT-5.2 as substrate. Value accrues to whoever owns the orchestration layer, not to whoever trained the weights.

Lenny's Podcast 2026-04-05-1

An AI State of the Union: We've Passed the Inflection Point & Dark Factories Are Coming

Willison's practitioner evidence confirms the November inflection is real: coding agents crossed from "mostly works" to "almost always does what you told it to do," enabling 95% AI-written code for skilled engineers. The buried signal: productivity gains plateau at human cognitive limits, not tool limits. Running four parallel agents produces burnout by 11am, and the trust signals we've relied on for decades (docs, tests, stars) can now be generated in minutes, leaving fresh code indistinguishable from battle-tested software. The dark factory pattern (nobody writes code AND nobody reads code) is fascinating but premature: an N=1 case study, $10K/day QA costs, and zero production outcome data.

Simon Willison's Weblog 2026-03-08-2

Can coding agents relicense open source through a "clean room" implementation of code?

Coding agents can now reimplement GPL codebases against their test suites in hours, making copyleft economically unenforceable. The chardet LGPL→MIT relicensing dispute is the first clean test case, but the real bomb is training-data contamination: if the model was trained on the original code, no "clean room" claim holds. The argument generalizes to any governance mechanism that relies on cost-of-reimplementation as friction.